Sampling, Metric Entropy, and Dimensionality Reduction
Authors
Abstract
Similar Resources
Adaptive Metric Dimensionality Reduction
We study data-adaptive dimensionality reduction in the context of supervised learning in general metric spaces. Our main statistical contribution is a generalization bound for Lipschitz functions in metric spaces that are doubling, or nearly doubling, which yields a new theoretical explanation for empirically reported improvements gained by preprocessing Euclidean data by PCA (Principal Compone...
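The PCA preprocessing step discussed in this abstract can be sketched minimally as follows. This is a generic illustration of projecting data onto its top-k principal components via SVD, not the paper's own experimental pipeline; the function name `pca` and the toy data are illustrative.

```python
import numpy as np

def pca(X, k):
    """Project data onto its top-k principal components (minimal sketch)."""
    Xc = X - X.mean(axis=0)                        # center the data
    # SVD of the centered data: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # coordinates in the top-k subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                     # 100 points in 50 dimensions
Z = pca(X, 5)
print(Z.shape)  # (100, 5)
```

Reducing Euclidean data this way lowers its effective (doubling) dimension, which is the quantity the generalization bound above depends on.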
Robust inversion, dimensionality reduction, and randomized sampling
We consider a class of inverse problems in which the forward model is the solution operator to linear ODEs or PDEs. This class admits several dimensionality-reduction techniques based on data averaging or sampling, which are especially useful for large-scale problems. We survey these approaches and their connection to stochastic optimization. The data-averaging approach is only viable, however,...
Spectral Dimensionality Reduction via Maximum Entropy
We introduce a new perspective on spectral dimensionality reduction which views these methods as Gaussian random fields (GRFs). Our unifying perspective is based on the maximum entropy principle which is in turn inspired by maximum variance unfolding. The resulting probabilistic models are based on GRFs. The resulting model is a nonlinear generalization of principal component analysis. We show ...
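As a point of reference for the spectral methods this abstract unifies, here is a generic spectral embedding via the top eigenvectors of a double-centered similarity matrix (classical MDS style). This is not the paper's maximum-entropy GRF model; the function name and toy data are illustrative.

```python
import numpy as np

def spectral_embedding(K, k):
    """Embed points using the top-k eigenvectors of a centered similarity matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = H @ K @ H                         # double-center the similarities
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # pick the top-k
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# With a linear kernel, this recovers PCA coordinates up to sign
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Y = spectral_embedding(X @ X.T, 2)
print(Y.shape)  # (30, 2)
```

With a linear kernel this reduces to PCA, which matches the abstract's remark that the probabilistic model is a nonlinear generalization of principal component analysis.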
On Point Sampling Versus Space Sampling for Dimensionality Reduction
In recent years, random projection has been used as a valuable tool for performing dimensionality reduction of high dimensional data. Starting with the seminal work of Johnson and Lindenstrauss [8], a number of interesting implementations of the random projection techniques have been proposed for dimensionality reduction. These techniques are mostly space symmetric random projections in which r...
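A space-symmetric random projection of the kind this abstract contrasts with point sampling can be sketched as below, assuming Gaussian entries scaled by 1/sqrt(k) so that norms are preserved in expectation (the Johnson-Lindenstrauss setting). The helper name and toy data are illustrative.

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Gaussian random projection from d to k dimensions (minimal sketch)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Scaling by 1/sqrt(k) preserves squared norms in expectation
    R = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))   # 200 points in 1000 dimensions
Y = random_projection(X, 50)
print(Y.shape)  # (200, 50)
```

Because every coordinate of the projection mixes all input coordinates, this is a "space-symmetric" scheme; point-sampling variants instead keep a subset of coordinates or data points.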
Joint Dimensionality Reduction and Metric Learning: A Geometric Take
To be tractable and robust to data noise, existing metric learning algorithms commonly rely on PCA as a pre-processing step. How can we know, however, that PCA, or any other specific dimensionality reduction technique, is the method of choice for the problem at hand? The answer is simple: We cannot! To address this issue, in this paper, we develop a Riemannian framework to jointly learn a mappi...
Journal
Journal title: SIAM Journal on Mathematical Analysis
Year: 2015
ISSN: 0036-1410,1095-7154
DOI: 10.1137/130944436